A Comparison of Selection Schemes used in Genetic Algorithms

نویسندگان

  • Tobias Blickle
  • Lothar Thiele
چکیده

Genetic Algorithms are a common probabilistic optimization method based on the model of natural evolution. One important operator in these algorithms is the selection scheme for which a new description model is introduced in this paper. With this a mathematical analysis of tournament selection, truncation selection, linear and exponential ranking selection and proportional selection is carried out that allows an exact prediction of the tness values after selection. The further analysis derives the selection intensity, selection variance, and the loss of diversity for all selection schemes. For completion a pseudo-code formulation of each method is included. The selection schemes are compared and evaluated according to their properties leading to an uni ed view of these di erent selection schemes. Furthermore the correspondence of binary tournament selection and ranking selection in the expected tness distribution is proven. Foreword This paper is the revised and extended version of the TIK-Report No. 11 from April, 1995. The main additions to the rst edition are the analysis of exponential ranking selection and proportional selection. Proportional selection is only included for completeness we believe that it is a very unsuited selection method and we will show this (like it has be done by other researchers, too) based on a mathematical analysis in chapter 7. Furthermore for each selection scheme a pseudo-code notation is given and a short remark on time complexity is included. The main correction concerns the approximation formula for the selection variance of tournament selection. The approximation given in the rst edition was completely wrong. In this report the approximation formula is derived by a genetic algorithm, or better speaking by the genetic programming optimization method. The used method is described in appendix A and also applied to derive an analytic approximation for the selection intensity and selection variance of exponential ranking selection. We hope that this report summarizes the most important facts for these ve selection schemes and gives all researches a well founded basis to chose the appropriate selection scheme for their purpose. Tobias Blickle Z urich, Dec., 1995 1 Contents 1 Introduction 4 2 Description of Selection Schemes 6 2.1 Average Fitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.2 Fitness Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.3 Reproduction Rate . . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.4 Loss of Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.5 Selection Intensity . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.6 Selection Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 3 Tournament Selection 14 3.1 Concatenation of Tournament Selection . . . . . . . . . . . . . . . 17 3.2 Reproduction Rate . . . . . . . . . . . . . . . . . . . . . . . . . . 19 3.3 Loss of Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 3.4 Selection Intensity . . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.5 Selection Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 4 Truncation Selection 23 4.1 Reproduction Rate . . . . . . . . . . . . . . . . . . . . . . . . . . 24 4.2 Loss of Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 4.3 Selection Intensity . . . . . . . . . . . . . . . . . . . . . . . . . . 25 4.4 Selection Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . 
25 5 Linear Ranking Selection 27 5.1 Reproduction Rate . . . . . . . . . . . . . . . . . . . . . . . . . . 30 5.2 Loss of Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 5.3 Selection Intensity . . . . . . . . . . . . . . . . . . . . . . . . . . 32 5.4 Selection Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 6 Exponential Ranking Selection 34 6.1 Reproduction Rate . . . . . . . . . . . . . . . . . . . . . . . . . . 37 6.2 Loss of Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 6.3 Selection Intensity and Selection Variance . . . . . . . . . . . . . 38 2 7 Proportional Selection 40 7.1 Reproduction Rate . . . . . . . . . . . . . . . . . . . . . . . . . . 41 7.2 Selection Intensity . . . . . . . . . . . . . . . . . . . . . . . . . . 41 8 Comparison of Selection Schemes 43 8.1 Reproduction Rate and Universal Selection . . . . . . . . . . . . . 43 8.2 Comparison of the Selection Intensity . . . . . . . . . . . . . . . . 46 8.3 Comparison of Loss of Diversity . . . . . . . . . . . . . . . . . . . 47 8.4 Comparison of the Selection Variance . . . . . . . . . . . . . . . . 48 8.5 The Complement Selection Schemes: Tournament and Linear Ranking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 9 Conclusion 52 A Deriving Approximation Formulas Using Genetic Programming 53 A.1 Approximating the Selection Variance of Tournament Selection . . 54 A.2 Approximating the Selection Intensity of Exponential Ranking Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 A.3 Approximating the Selection Variance of Exponential Ranking Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 B Used Integrals 60 C Glossary 61 3 Chapter 1 Introduction Genetic Algorithms (GA) are probabilistic search algorithms characterized by the fact that a number N of potential solutions (called individuals Ji 2 J, where J represents the space of all possible individuals) of the optimization problem simultaneously sample the search space. This population P = fJ1; J2; :::; JNg is modi ed according to the natural evolutionary process: after initialization, selection ! : JN 7! JN and recombination : JN 7! JN are executed in a loop until some termination criterion is reached. Each run of the loop is called a generation and P ( ) denotes the population at generation . The selection operator is intended to improve the average quality of the population by giving individuals of higher quality a higher probability to be copied into the next generation. Selection thereby focuses the search on promising regions in the search space. The quality of an individual is measured by a tness function f : J 7! R. Recombination changes the genetic material in the population either by crossover or by mutation in order to exploit new points in the search space. The balance between exploitation and exploration can be adjusted either by the selection pressure of the selection operator or by the recombination operator, e.g. by the probability of crossover. As this balance is critical for the behavior of the GA it is of great interest to know the properties of the selection and recombination operators to understand their in uence on the convergence speed. Some work has been done to classify the di erent selection schemes such as proportionate selection, ranking selection, tournament selection. Goldberg [Goldberg and Deb, 1991] introduced the term of takeover time. 
The takeover time is the number of generations that is needed for a single best individual to ll up the whole generation if no recombination is used. Recently Back [Back, 1994] has analyzed the most prominent selection schemes used in Evolutionary Algorithms with respect to their takeover time. In [M uhlenbein and SchlierkampVoosen, 1993] the selection intensity in the so called Breeder Genetic Algorithm (BGA) is used to measure the progress in the population. The selection intensity is derived for proportional selection and truncation selection. De la Maza and Tidor [de la Maza and Tidor, 1993] analyzed several selection methods according 4 to their scale and translation invariance. An analysis based on the behavior of the best individual (as done by Goldberg and Back) or on the average population tness (as done by M uhlenbein) only describes one aspect of a selection method. In this paper a selection scheme is described by its interaction on the distribution of tness values. Out of this description several properties can be derived, e.g. the behavior of the best or average individual. The description is introduced in the next chapter. In chapter 3 an analysis of the tournament selection is carried out and the properties of the tournament selection are derived. The subsequent chapters deal with truncation selection, ranking selection, and exponential ranking selection. Chapter 7 is devoted to proportional selection that represents some kind of exception to the other selection schemes analyzed in this paper. Finally all selection schemes are compared. 5 Chapter 2 Description of Selection Schemes In this chapter we introduce a description of selection schemes that will be used in the subsequent chapters to analyze and compare several selection schemes, namely tournament selection, truncation selection, and linear and exponential ranking selection and tness proportional selection. The description is based on the tness distribution of the population before and after selection as introduced in [Blickle and Thiele, 1995]. It is assumed that selection and recombination are done sequentially: rst a selection phase creates an intermediate population P 0( ) and then recombination is performed with a certain probability pc on the individuals of this intermediate population to get the population for the next generation (Fig. 2.1). Recombination includes crossover and mutation or any other operator that changes the \genetic material". This kind of description di ers from the common paradigms where selection is made to obtain the individuals for recombination [Goldberg, 1989; Koza, 1992]. But it is mathematically equivalent and allows to analyze the selection method separately. For selection only the tness values of the individuals are taken into account. Hence, the state of the population is completely described by the tness values of all individuals. There exist only a nite number of di erent tness values f1; :::; fn(n N) and the state of the population can as well be described by the values s(fi) that represent the number of occurrences of the tness value fi in the population. De nition 2.0.1 (Fitness distribution) The function s : R 7! Z+ 0 assigns to each tness value f 2 R the number of individuals in a population P 2 JN carrying this tness value. s is called the tness distribution of a population P . The characterization of the population by its tness distribution has also been used by other researches, but in a more informal way. 
In [M uhlenbein and Schlierkamp-Voosen, 1993] the tness distribution is used to calculate some properties of truncation selection. In [Shapiro et al., 1994] a statistical mechanics approach is taken to describe the dynamics of a Genetic Algorithm that makes use of tness distributions, too. 6 Selection (whole population) Randomly created Initial Population End Yes No Problem solved ? Recombination p 1-p c c Figure 2.1: Flowchart of the Genetic Algorithm. It is possible to describe a selection method as a function that transforms a tness distribution into another tness distribution. De nition 2.0.2 (Selection method) A selection method is a function that transforms a tness distribution s into an new tness distribution s0: s0 = (s; par list) (2.1) par list is an optional parameter list of the selection method. As the selection methods are probabilistic we will often make use of the expected tness distribution. De nition 2.0.3 (Expected tness distribution) denotes the expected tness distribution after applying the selection method to the tness distribution s, i.e. (s; par list) = E( (s; par list)) (2.2) The notation s = (s; par list) will be used as abbreviation. 7 It is interesting to note that it is also possible to calculate the variance of the resulting distribution. Theorem 2.0.1 The variance in obtaining the tness distribution s0 is 2 s = s 1 s N (2.3) Proof: s (fi) denotes the expected number of individuals with tness value fi after selection. It is obtained by doing N experiments \select an individual from the population using a certain selection mechanism". Hence the selection probability of an individual with tness value fi is given by pi = s (fi) N . To each tness value there exists a Bernoulli trial \an individual with tness fi is selected". As the variance of a Bernoulli trial with N trials is given by 2 = Np(1 p), (2.3) is obtained using pi. 2 The index s in s stands for \sampling" as it is the mean variance due to the sampling of the nite population. The variance of (2.3) is obtained by performing the selection method in N independent experiments. It is possible to reduce the variance almost completely by using more sophisticated sampling algorithms to select the individuals. We will introduce Baker's \stochastic universal sampling" algorithm (SUS) [Baker, 1987], which is an optimal sampling algorithm when we compare the di erent selection schemes in chapter 8. De nition 2.0.4 (Cumulative tness distribution) Let n be the number of unique tness values and f1 < ::: < fn 1 < fn (n N) the ordering of the tness values with f1 denoting the worst tness occurring in the population and fn denoting the best tness in the population. S(fi) denotes the number of individuals with tness value fi or worse and is called cumulative tness distribution, i.e. S(fi) = 8><>: 0 : i < 1 Pj=i j=1 s(fj) : 1 i n N : i > n (2.4) Example 2.0.1 As an example of a discrete tness distribution we use the initial tness distribution of the \wall-following-robot" from Koza [Koza, 1992]. This distribution is typical of problems solved by genetic programming (many bad and only very few good individuals exist). Figure 2.2 shows the distribution s(f) (left) and the cumulative distribution S(f) (right). We will now describe the distribution s(f) as a continuous distribution s(f) allowing the following properties to be easily derived. 
To do so, we assume 8 2.5 5 7.5 10 12.5 15 f 0 100 200 300 400 500 600 s(f) 2.5 5 7.5 10 12.5 15 f 0 200 400 600 800 1000 S(f) Figure 2.2: The tness distribution s(f) and the cumulative tness distribution S(f) for the \wall-following-robot" problem. continuous distributed tness values. The range of the function s(f) is f0 < f fn, using the same notation as in the discrete case. We denote all functions in the continuous case with a bar, e.g. we write s(f) instead of s(f). Similar sums are replaced by integrals, for example S(f) = Z f f0 s(x) dx (2.5) denotes the continuous cumulative tness distribution. Example 2.0.2 As an example for a continuous tness distribution we chose the Gaussian distribution G( ; ) with G( ; )(x) = 1 p2 e (x )2 2 2 (2.6) The distribution sG(f) = NG( ; )(f) with = 30; = 100; N = 1000 and f0 = 1; fn = +1 is shown in the interesting region f 2 [0; 200] in Figure 2.3 (left). The right graph in this gure shows the cumulative tness distribution SG(f). We will now introduce the aspects of the tness distribution we want to compare. The de nitions given will all refer to continuous distributed tness values. 2.1 Average Fitness De nition 2.1.1 (Average tness) M denotes the average tness of the population before selection and M denotes the expected average tness after selection: M = 1 N Z fn f0 s(f) f df (2.7) M = 1 N Z fn f0 s (f) f df (2.8) 9 50 100 150 200 f 2 4 6 8 10 12 s(f) 50 100 150 200 f 200 400 600 800 1000 S(f) Figure 2.3: The tness distribution sG(f) (left) and the cumulative tness distribution SG(f) (right). 2.2 Fitness Variance De nition 2.2.1 (Fitness variance) The tness variance 2 denotes the variance of the tness distribution s(f) before selection and ( )2 denotes the variance of the tness distribution s (f) after selection: 2 = 1 N Z fn f0 s(f) (f M)2 df = 1 N Z fn f0 f 2 s(f) df M2 (2.9) ( )2 = 1 N Z fn f0 s (f) (f M )2 df = 1 N Z fn f0 f 2 s (f) df M 2 (2.10) Note the di erence of this variance to the variance in obtaining a certain tness distribution characterized by theorem 2.0.1 2.3 Reproduction Rate De nition 2.3.1 (Reproduction rate) The reproduction rate R(f) denotes the ratio of the number of individuals with a certain tness value f after and before selection R(f) = ( s (f) s(f) : s(f) > 0 0 : s(f) = 0 (2.11) A reasonable selection method should favor good individuals by assigning them a reproduction rate R(f) > 1 and punish bad individuals by a ratio R(f) < 1. 10 2.4 Loss of Diversity During every selection phase bad individuals will be lost and be replaced by copies of better individuals. Thereby a certain amount of \genetic material" is lost that was contained in the bad individuals. The number of individuals that are replaced corresponds to the strength of the \loss of diversity". This leads to the following de nition. De nition 2.4.1 (Loss of diversity) The loss of diversity pd is the proportion of individuals of a population that is not selected during the selection phase. Theorem 2.4.1 If the reproduction rate R(f) increases monotonously in f , the loss of diversity of a selection method is pd = 1 N S(fz) S (fz) (2.12) where fz denotes the tness value such that R(fz) = 1. Proof: For all tness values f 2 (f0; fz] the reproduction rate is less than one. Hence the number of individuals that are not selected during selection is given by R fz f0 ( s(x) s (x)) dx. It follows that pd = 1 N Z fz f0 ( s(x) s (x)) dx = 1 N Z fz f0 s(x) dx Z fz f0 s (x) dx! 
= 1 N S(fz) S (fz) 2 The loss of diversity should be as low as possible because a high loss of diversity increases the risk of premature convergence. In his dissertation [Baker, 1989], Baker has introduced a similar measure called \reproduction rate RR". RR gives the percentage of individuals that is selected to reproduce, hence RR = 100(1 pd). 2.5 Selection Intensity The term \selection intensity" or \selection pressure" is often used in di erent contexts and for di erent properties of a selection method. Goldberg and Deb [Goldberg and Deb, 1991] and Back [Back, 1994] use the \takeover time" to de ne the selection pressure. Whitley calls the parameter c (see chapter 5) of his ranking selection method selection pressure. 11 We use the term \selection intensity" in the same way it is used in population genetic [Bulmer, 1980]. M uhlenbein has adopted the de nition and applied it to genetic algorithms [M uhlenbein and Schlierkamp-Voosen, 1993]. Recently more and more researches are using this term to characterize selection schemes [Thierens and Goldberg, 1994a; Thierens and Goldberg, 1994b; Back, 1995; Blickle and Thiele, 1995]. The change of the average tness of the population due to selection is a reasonable measure for selection intensity. In population genetic the term selection intensity was introduced to obtain a normalized and dimension-less measure. The idea is to measure the progress due to selection by the so called \selection differential", i.e. the di erence between the population average tness after and before selection. Dividing this selection di erential by the mean variance of the population tness leads to the desired dimension-less measure that is called the selection intensity. De nition 2.5.1 (Selection intensity) The selection intensity of a selection method for the tness distribution s(f) is the standardized quantity I = M M (2.13) By this, the selection intensity depends on the tness distribution of the initial population. Hence, di erent tness distributions will in general lead to di erent selection intensities for the same selection method. For comparison it is necessary to restrict oneself to a certain initial distribution. Using the normalized Gaussian distribution G(0; 1) as initial tness distribution leads to the following de nition. De nition 2.5.2 (Standardized selection intensity) The standardized selection intensity I is the expected average tness value of the population after applying the selection method to the normalized Gaussian distribution G(0; 1)(f) = 1 p2 e f2 2 : I = Z 1 1 f (G(0; 1))(f) df (2.14) The \e ective" average tness value of a Gaussian distribution with mean and variance 2 can easily be derived as M = I + . Note that this de nition of the standardized selection intensity can only be applied if the selection method is scale and translation invariant. This is the case for all selection schemes examined in this paper except proportional selection. Likewise this de nition has no equivalent in the case of discrete tness distributions. If the selection intensity for a discrete distribution has to be calculated, one must refer to De nition 2.5.1. In the remainder of this paper we use the term \selection intensity" as equivalent for \standardized selection intensity" as our intention is the comparison of selection schemes. 12 2.6 Selection Variance In addition to the selection intensity we introduce the term of \selection variance". 
The de nition is analogous to the de nition of the selection intensity, but here we are interested in the the new variance of the tness distribution after selection. De nition 2.6.1 (Selection variance) The selection variance is the normalized expected variance of the tness distribution of the population after applying the selection method to the tness distribution s(f), i.e. V = ( )2 2 (2.15) For comparison the standardized selection variance is of interest. De nition 2.6.2 (Standardized selection variance) The standardized selection variance V is the normalized expected variance of the tness distribution of the population after applying the selection method to the normalized Gaussian distribution G(0; 1). V = Z 1 1(f I )2 (G(0; 1))(f) df (2.16) that is equivalent to V = Z 1 1 f 2 (G(0; 1))(f) df I2 (2.17) Note that there is a di erence between the selection variance and the loss of diversity. The loss of diversity gives the proportion of individuals that are not selected, regardless of their tness value. The standardized selection variance is de ned as the new variance of the tness distribution assuming a Gaussian initial tness distribution. Hence a selection variance of 1 means that the variance is not changed by selection. A selection variance less than 1 reports a decrease in variance. The lowest possible value of V is zero, which means that the variance of the tness values of population after selection is itself zero. Again we will use the term the \selection variance" as equivalent for \standardized selection variance". 13 Chapter 3 Tournament Selection Tournament selection works as follows: Choose some number t of individuals randomly from the population and copy the best individual from this group into the intermediate population, and repeat N times. Often tournaments are held only between two individuals (binary tournament) but a generalization is possible to an arbitrary group size t called tournament size. The pseudo code of tournament selection is given by algorithm 1. Algorithm 1: (Tournament Selection) Input: The population P ( ) the tournament size t 2 f1; 2; :::; Ng Output: The population after selection P ( )0 tournament(t,J1; :::; JN): for i 1 to N do J 0 i best t individual out of t randomly picked individuals from fJ1; :::; JNg; od return fJ 0 1; :::; J 0 Ng The outline of the algorithm shows that tournament selection can be implemented very e ciently as no sorting of the population is required. Implemented in the way above it has the time complexity O(N). Using the notation introduced in the previous chapter, the entire tness distribution after selection can be predicted. The prediction will be made for the discrete (exact) tness distribution as well as for a continuous tness distribution. These results were rst published in [Blickle and Thiele, 1995]. The calculations assume that tournament selection is done with replacement. Theorem 3.0.1 The expected tness distribution after performing tournament 14 selection with tournament size t on the distribution s is T (s; t)(fi) = s (fi) = N 0@ S(fi) N !t S(fi 1) N !t1A (3.1) Proof: We rst calculate the expected number of individuals with tness fi or worse, i.e. S (fi). An individual with tness fi or worse can only win the tournament if all other individuals in the tournament have a tness of fi or worse. This means we have to calculate the probability that all t individuals have a tness of fi or worse. 
As the probability to choose an individual with tness fi or worse is given by S(fi) N we get S (fi) = N S(fi) N !t (3.2) Using this equation and the relation s (fi) = S (fi) S (fi 1) (see De nition 2.0.4) we obtain (3.1). 2 Equation (3.1) shows the strong in uence of the tournament size t on the behavior of the selection scheme. Obviously for t = 1 we obtain (in average) the unchanged initial distribution as T (s; 1)(fi) = N S(fi) N S(fi 1) N = S(fi) S(fi 1) = s(fi). In [Back, 1994] the probability for the individual number i to be selected by tournament selection is given by pi = N t((N i + 1)t (N i)t), under the assumption that the individuals are ordered according to their tness value f(J1) f(J2) ::: f(JN). Note that Back uses an \reversed" tness function where the best individual has the lowest index. For comparison with our results we transform the task into an maximization task using j = N i + 1: pj = N t(jt (j 1)t) 1 j N (3.3) This formula is as a special case of (3.1) with all individuals having a di erent tness value. Then s(fi) = 1 for all i 2 [1; N ] and S(fi) = i and pi = s (fi) N yields the same equation as given by Back. Note that (3.3) is not valid if some individuals have the same tness value. Example 3.0.1 Using the discrete tness distribution from Example 2.0.1 (Figure 2.2) we obtain the tness distribution shown in Figure 3.1 after applying tournament selection with a tournament size t = 10. In addition to the expected distribution there are also the two graphs shown for s (f) s(f) and s (f)+ s(f). Hence a distribution obtained from one tournament run will lie in the given interval (the con dence interval) with a probability of 68%. The high agreement between the theoretical derived results and a simulation is veri ed in Figure 3.2. Here the distributions according to (3.1) and the average of 20 simulation are shown. 15 2.5 5 7.5 10 12.5 15 f 20 40 60 80 100 s*(f) Figure 3.1: The resulting expected tness distribution and the con dence interval of 68% after applying tournament selection with a tournament size of 10. In example 3.0.1 we can see a very high variance in the distribution that arises from fact that the individuals are selected in N independent trials. In chapter 8.1 we will meet the so called \stochastic universal sampling" method that minimizes this mean variance. Theorem 3.0.2 Let s(f) be the continuous tness distribution of the population. Then the expected tness distribution after performing tournament selection with tournament size t is T ( s; t))(f) = s (f) = t s(f) S(f) N !t 1 (3.4) Proof: Analogous to the proof of the discrete case the probability of an individual with tness f or worse to win the tournament is given by S (f) = N S(f) N !t (3.5) As s (f) = d S (f) df , we obtain (3.4). 2 Example 3.0.2 Figure 3.3 shows the resulting tness distributions after applying tournament selection on the Gaussian distribution from Example 2.0.2. 16 5 10 15 f 0 25 50 75 100 s(f) Figure 3.2: Comparison between theoretical derived distribution (|) and simulation (-) for tournament selection (tournament size t = 10). 3.1 Concatenation of Tournament Selection An interesting property of the tournament selection is the concatenation of several selection phases. Assume an arbitrary population with the tness distribution s. We apply rst tournament selection with tournament size t1 to this population and then on the resulting population tournament selection with tournament size t2. 
The obtained tness distribution is the same as if only one tournament selection with the tournament size t1t2 is applied to the initial distribution s. Theorem 3.1.1 Let s be a continuous tness distribution and t1; t2 1 two tournament sizes. Then the following equation holds T ( T ( s; t1); t2)(f) = T ( s; t1 t2)(f) (3.6) Proof: T ( T ( s; t1); t2)(f) = t2 T ( s; t1)(f) 1 N Z f f0 T ( s; t1)(x) dx!t2 1 = t2t1 s(f) 1 N Z f f0 s(x) dx!t1 1 1 N Z f f0 t1 s(x) 1 N Z x f0 s(y) dy t1 1 dx!t2 1 As Z f f0 t1 s(x) 1 N Z x f0 s(y) dy t1 1 dx = N 1 N Z f f0 s(x) dx!t1 17 50 100 150 200 250 f 0 5 10 15 20 25 30 s(f) Figure 3.3: Gaussian tness distribution approximately leads again to Gaussian distributions after tournament selection (from left to right: initial distribution, t =2, t = 5, t = 10). we can write T ( T ( s; t1); t2)(f) = t2t1 s(f) 1 N Z f f0 s(x) dx!t1 10@ 1 N Z f f0 s(x) dx!t11At2 1 = t2t1 s(f) 1 N Z f f0 s(x) dx!t1 1 1 N Z f f0 s(x) dx!t1(t2 1) = t2t1 s(f) 1 N Z f f0 s(x) dx!t1t2 1 = T ( s; t1 t2)(f) 2 In [Goldberg and Deb, 1991] the proportion P of bestt individuals after selections with tournament size t (without recombination) is given to P = 1 (1 P0)t (3.7) This can be obtained as a special case from Theorem 3.1.1, if only the bestt individuals are considered. Corollary 3.1.1 Let s(f) be a tness distribution representable as s(f) = g(f)0 @R f f0 g(x) dx N 1A 1 (3.8) 18 with 1 and R fn f0 g(x) dx = N . Then the expected distribution after tournament with tournament size t is s (f) = t g(f)0 @R f f0 g(x) dx N 1A t 1 (3.9) Proof: If we assume that s(f) is the result of applying tournament selection with tournament size on the distribution g(f), (3.9) is directly obtained using Theorem 3.1.1. 2 3.2 Reproduction Rate Corollary 3.2.1 The reproduction rate of tournament selection is RT (f) = s (f) s(f) = t S(f) N !t 1 (3.10) This is directly obtained by substituting (3.4) in (2.11). Individuals with the lowest tness have a reproduction rate of almost zero and the individuals with the highest tness have a reproduction rate of t. 3.3 Loss of Diversity Theorem 3.3.1 The loss of diversity pd;T of tournament selection is pd;T (t) = t 1 t 1 t t t 1 (3.11) Proof: S(fz) can be determined using (3.10) (refer to Theorem 2.4.1 for the de nition of fz): S(fz) = N t 1 t 1 (3.12) Using De nition 2.4.1 and (3.12) we obtain: pd;T (t) = 1 N S(fz) S (fz) = S(fz) N S(fz) N !t = t 1 t 1 t t t 1 2 It turns out that the number of individuals lost increases with the tournament size (see Fig. 3.4). About the half of the population is lost at tournament size t = 5. 19 5 10 15 20 25 30 0 0.2 0.4 0.6 0.8 1 tournament size t p (t) d Figure 3.4: The loss of diversity pd;T (t) for tournament selection. 3.4 Selection Intensity To calculate the selection intensity we calculate the average tness of the population after applying tournament selection on the normalized Gaussian distribution G(0; 1). Using De nition 2.1.1 we obtain IT (t) = Z 1 1 t x 1 p2 e x2 2 Z x 1 1 p2 e y2 2 dy!t 1 dx (3.13) These integral equations can be solved analytically for the cases t = 1; : : : ; 5 ([Blickle and Thiele, 1995; Back, 1995; Arnold et al., 1992]): IT (1) = 0 IT (2) = 1 p IT (3) = 3 2p IT (4) = 6 p arctanp2 IT (5) = 10 p ( 3 2 arctanp2 1 4) 20 For a tournament size of two Thierens and Goldberg derive the same average tness value [Thierens and Goldberg, 1994a] in a completely di erent manner. But their formulation can not be extended to other tournament sizes. 
For larger tournament sizes (3.13) can be accurately evaluated by numerical integration. The result is shown on the left side of Figure 3.5 for a tournament size from 1 to 30. But an explicit expression of (3.13) may not exist. By means of the steepest descent method (see, e.g. [Henrici, 1977]) an approximation for large tournament sizes can be given. But even for small tournament sizes this approximation gives acceptable results. The calculations lead to the following recursion equation: IT (t)k qck(ln(t) ln(IT (t)k 1)) (3.14) with IT (t)0 = 1 and k the recursion depth. The calculation of the constants ck is di cult. Taking a rough approximation with k = 2 the following equation is obtained that approximates (3.13) with an relative error of less than 2.4% for t 2 [2; 5], for tournament sizes t > 5 the relative error is less than 1%: IT (t) r2(ln(t) ln(q4:14 ln(t))) (3.15) 5 10 15 20 25 30 t 0 0.5 1 1.5 2 2.5 I(t) 5 10 15 20 25 30 t 0 0.2 0.4 0.6 0.8 1 V(t) Figure 3.5: Dependence of the selection intensity (left) and selection variance (right) on the tournament size t. 3.5 Selection Variance To determine the selection variance we need to solve the equation VT (t) = Z 1 1 t (x IT (t))2 1 p2 e x2 2 Z x 1 1 p2 e y2 2 dy!t 1 dx (3.16) For a binary tournament we have VT (2) = 1 1 21 Here again (3.16) can be solved by numerical integration. The dependence of the selection variance on the tournament size is shown on the right of Figure 3.5. To obtain a useful analytic approximation for the selection variance, we perform a symbolic regression using the genetic programming optimization method. Details about the way the data was computed can be found in appendix A. The following formula approximates the selection variance with an relative error of less than 1.6% for t 2 f1; : : : ; 30g: VT (t) s2:05 + t 3:14t 3 2 ; t 2 f1; : : : ; 30g (3.17) 22 Chapter 4 Truncation Selection In Truncation selection with threshold T only the fraction T best individuals can be selected and they all have the same selection probability. This selection method is often used by breeders and in population genetic [Bulmer, 1980; Crow and Kimura, 1970]. M uhlenbein has introduced this selection scheme to the domain of genetic algorithms [M uhlenbein and Schlierkamp-Voosen, 1993]. This method is equivalent to ( ; )-selection used in evolution strategies with T = [Back, 1995]. The outline of the algorithm is given by algorithm 2. Algorithm 2: (Truncation Selection) Input: The population P ( ), the truncation threshold T 2 [0; 1] Output: The population after selection P ( )0 truncation(T ,J1; :::; JN): J sorted population J according tness with worst individual at the rst position for i 1 to N do r randomf [(1 T )N ]; : : : ; Ng J 0 i Jr od return fJ 0 1; :::; J 0 Ng As a sorting of the population is required, truncation selection has a time complexity of O(N lnN). Although this method has been investigated several times we will describe this selection method using the methods derived here, as additional properties can be observed. Theorem 4.0.1 The expected tness distribution after performing truncation se23 lection with threshold T on the distribution s is (s; T )(fi) = s (fi) = 8><>: 0 : S(fi) (1 T )N S(fi) (1 T )N T : S(fi 1) (1 T )N < S(fi) s(fi) T : else (4.1) Proof: The rst case in (4.1) gives zero o spring to individuals with a tness value below the truncation threshold. The second case re ects the fact that threshold may lie within si. Then only the fraction above the threshold (Si (1 T )N) may be selected. 
These fraction is in average copied 1 T times. The last case in (4.1) gives all individuals above the threshold the multiplication factor 1 T that is necessary to keep the population size constant. 2 Theorem 4.0.2 Let s(f) be the continuous distribution of the population. Then the expected tness distribution after performing truncation selection with threshold T is ( s; T )(f) = ( s(f) T : S(f) > (1 T )N 0 : else (4.2) Proof: As S(f) gives the cumulative tness distribution, it follows from the construction of truncation selection that all individuals with S(f) < (1 T )N are truncated. As the population size is kept constant during selection, all other individuals must be copied in average 1 T times. 2 4.1 Reproduction Rate Corollary 4.1.1 The reproduction rate of truncation selection is R (f) = ( 1 T : S(f) > (1 T )N 0 : else (4.3) 4.2 Loss of Diversity By construction of the selection method only the fraction T of the population will be selected, i.e. the loss of diversity is pd; (T ) = 1 T (4.4) 24 4.3 Selection Intensity The results presented in this subsection have been already derived in a di erent way in [Crow and Kimura, 1970]. Theorem 4.3.1 The selection intensity of truncation selection is I (T ) = 1 T 1 p2 e f2 c 2 (4.5) where fc is determined by T = R1 fc 1 p2 e f2 2 df . Proof: The selection intensity is de ned as the average tness of the population after selection assuming an initial normalized Gaussian distributionG(0; 1), hence I = R1 1 (G(0; 1))(f) f df . As no individual with a tness value worse than fc will be selected, the lower integration bound can be replaced by fc. Here fc is determined by S(fc) = (1 T )N = 1 T (4.6) because N = 1 for the normalized Gaussian distribution. So we can compute I (T ) = Z 1 fc 1 T 1 p2 e f2 2 f df = 1 T 1 p2 e f2 c 2 Here fc is determined by (4.6). Solving (4.6) for T yields T = 1 Z fc 1 1 p2 e f2 2 df = Z 1 fc 1 p2 e f2 2 df 2 A lower bound for the selection intensity reported by [M uhlenbein and Voigt, 1995] is I (T ) q 1 T T . Figure 4.1 shows on the left the selection intensity in dependence of parameter T . 4.4 Selection Variance Theorem 4.4.1 The selection variance of truncation selection is V (T ) = 1 I (T )(I (T ) fc) (4.7) 25 0.2 0.4 0.6 0.8 1 T 0 0.5 1 1.5 2 2.5 3 3.5 4 I(T) 0.2 0.4 0.6 0.8 1.0 0.2 0.4 0.6 0.8 1 T V(T) Figure 4.1: Selection intensity (left) and selection variance (right) of truncation selection. Sketch of proof: The substitution of (4.2) in the de nition equation (2.17) gives V (T ) = Z 1 fc f 2 1 T 1 p2 e f2 2 df I (T ))2 After some calculations this equation can be simpli ed to (4.7). 2 The selection variance is plotted on the right of Figure 4.1. (4.7) has also been derived in [Bulmer, 1980]. 26 Chapter 5 Linear Ranking Selection Ranking selection was rst suggested by Baker to eliminate the serious disadvantages of proportionate selection [Grefenstette and Baker, 1989; Whitley, 1989]. For ranking selection the individuals are sorted according their tness values and the rank N is assigned to the best individual and the rank 1 to the worst individual. The selection probability is linearly assigned to the individuals according to their rank:pi = 1 N + ( + ) i 1 N 1 ; i 2 f1; : : : ; Ng (5.1) Here N is the probability of the worst individual to be selected and + N the probability of the best individual to be selected. As the population size is held constant, the conditions + = 2 and 0 must be ful lled. Note that all individuals get a di erent rank, i.e. 
a di erent selection probability, even if they have the same tness value. Koza [Koza, 1992] determines the probability by a multiplication factor rm that determines the gradient of the linear function. A transformation into the form of (5.1) is possible by = 2 rm+1 and + = 2rm rm+1 . Whitley [Whitley, 1989] describes the ranking selection by transforming an equally distributed random variable 2 [0; 1] to determine the index of the selected individual j = b N 2(c 1) c qc2 4(c 1) c (5.2) where c is a parameter called \selection bias". Back has shown that for 1 < c 2 this method is almost identical to the probabilities in (5.1) with + = c [Back, 1994]. 27 Algorithm 3: (Linear Ranking Selection) Input: The population P ( ) and the reproduction rate of the worst individual 2 [0; 1] Output: The population after selection P ( )0 linear ranking( ,J1; :::; JN): J sorted population J according tness with worst individual at the rst position s0 0 for i 1 to N do si si 1 + pi (Equation 5.1) od for i 1 to N do r random[0,sN [ J 0 i Jl such that sl 1 r < sl od return fJ 0 1; :::; J 0 Ng The pseudo-code implementation of linear ranking selection is given by algorithm 3. The method requires the sorting of the population, hence the complexity of the algorithm is dominated by the complexity of sorting, i.e. O(N logN). Theorem 5.0.2 The expected tness distribution after performing ranking selection with on the distribution s is R(s; )(fi) = s (fi) = s(fi)N 1 N 1 + 1 N 1 S(fi)2 S(fi 1)2 (5.3) Proof: We rst calculate the expected number of individuals with tness fi or worse, i.e. S (fi). As the individuals are sorted according to their tness value this number is given by the sum of the probabilities of the S (fi) less t individuals: S (fi) = N S(fi) X j=1 pj = S(fi) + + N 1 S(fi) X j=1 j 1 = S(fi) + + N 1 1 2S(fi) (S(fi) 1) As + = 2 and s (fi) = S (fi) S (fi 1) we obtain s (fi) = (S(fi) S(fi 1)) + 1 N 1 (S(fi)(S(fi) 1) S(fi 1)(S(fi 1) 1)) 28 = s(fi) + 1 N 1 S(fi)2 S(fi 1)2 s(fi) = s(fi)N 1 N 1 + 1 N 1 S(fi)2 S(fi 1)2 2 Example 5.0.1 As an example we use again the tness distribution of the \wallfollowing-robot" from Example 2.0.1. The resulting distribution after ranking selection with = 0:1 is shown in Figure 5.1. Here again the con dence interval is shown. A comparison between theoretical analysis and the average of 20 simulations is shown in Figure 5.2. Again a very high agreement with the theoretical results is observed. 2.5 5 7.5 10 12.5 15 17.5 f 0 50 100 150 200 250 300 350 400 s*(f) Figure 5.1: The resulting expected tness distribution and the con dence interval of 68% after applying ranking selection with = 0:1: Theorem 5.0.3 Let s(f) be the continuous tness distribution of the population. Then the expected tness distribution after performing ranking selection R with on the distribution s is R( s; )(f) = s (f) = s(f) + 21 N S(f) s(f) (5.4) Proof: As the continuous form of (5.1) is given by p(x) = 1 N ( + + N x) we calculate S(f) using + = 2 : S (f) = N Z S(f) 0 p(x) dx 29 2.5 5 7.5 10 12.5 15 17.5 f 0 50 100 150 200 250 300 350 400 Figure 5.2: Comparison between theoretical derived distribution (|) and the average of 20 simulations (-) for ranking selection with = 1 N . = Z S(f) 0 dx + 21 N Z S(f) 0 x dx = S(f) + 1 N S(f)2 As s (f) = d S (f) df , (5.4) follows. 2 Example 5.0.2 Figure 5.3 shows the the initial continuous tness distribution sG and the resulting distributions after performing ranking selection. 
5.1 Reproduction Rate Corollary 5.1.1 The reproduction rate of ranking selection is RR(f) = + 21 N S(f) (5.5) This equation shows that the worst t individuals have the lowest reproduction rate R(f0) = and the best t individuals have the highest reproduction rate R(fn) = 2 = +. This can be derived from the construction of the method as N is the selection probability of the worst t individual and + N the one of the best t individual. 30 25 50 75 100 125 150 175 200 0 2.5 5 7.5 10 12.5 15 17.5 20 s*(f) fitness f Figure 5.3: Gaussian tness distribution sG(f) and the resulting distributions after performing ranking selection with = 0:5 and = 0 (from left to right). 5.2 Loss of Diversity Theorem 5.2.1 The loss of diversity pd;R( ) of ranking selection is pd;R( ) = (1 )1 4 (5.6) Proof: Using Theorem 2.4.1 and realizing that S(fz) = N2 we calculate: pd;R( ) = 1 N S(fz) S (fz) = 1 N S(fz) S(fz) 1 N S(fz)2! = 1 N N2 N2 1 N N2 4 ! = 1 4(1 ) 2 Baker has derived this result using his term of \reproduction rate" [Baker, 1989]. Note that the loss of diversity is again independent of the initial distribution. 31 5.3 Selection Intensity Theorem 5.3.1 The selection intensity of ranking selection is IR( ) = (1 ) 1 p (5.7) Proof: Using the de nition of the selection intensity (De nition 2.5.2) and using the Gaussian function for the initial tness distribution we obtain IR( ) = Z 1 1 x 1 p2 e x2 2 + 2(1 ) Z x 1 1 p2 e y2 2 dy! dx = p2 Z 1 1 xe x2 2 dx+ 1  Z 1 1 xe x2 2 Z x 1 e y2 2 dy dx As the rst summand is 0 and R1 1 xe x2 2 R x 1 e y2 2 dy dx = p we obtain (5.7). 2 The selection intensity of ranking selection is shown in Figure 5.4 (left) in dependence of the parameter . 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 η I( ) η 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 η V( ) η Figure 5.4: Selection intensity (left) and selection variance (right) of ranking selection. 5.4 Selection Variance Theorem 5.4.1 The selection variance of ranking is VR( ) = 1 (1 )2 = 1 IR( )2 (5.8) Proof: Substituting (5.4) into the de nition equation (2.17) leads to VR( ) = Z 1 1 f 2 1 p2 e f2 2 + 2(1 ) Z f 1 1 p2 e y2 2 dy! df IR( )2 32 VR( ) = p2 Z 1 1 f 2e f2 2 df + 1  Z 1 1 f 2e f2 2 Z f 1 e y2 2 dy df IR( )2 Using the relations B.7 and B.8 we obtain VR( ) = + (1 ) IR( )2 = 1 IR( )2 2 The selection variance of ranking selection is plotted on the right of Figure 5.4. 33 Chapter 6 Exponential Ranking Selection Exponential ranking selection di ers from linear ranking selection in that the probabilities of the ranked individuals are exponentially weighted. The base of the exponent is the parameter 0 < c < 1 of the method. The closer c is to 1 the lower is the \exponentiality" of the selection method. We will discuss the meaning and the in uence of this parameter in detail in the following. Again the rank N is assigned to the best individual and the rank 1 to the worst individual. Hence the probabilities of the individuals are given by pi = cN i PNj=1 cN j ; i 2 f1; :::; Ng (6.1) The sum PNj=1 cN j normalizes the probabilities to ensure that PNi=1 pi = 1. As PNj=1 cN j = cN 1 c 1 we can rewrite the above equation: pi = c 1 cN 1cN i ; i 2 f1; :::; Ng (6.2) The algorithm for exponential ranking (algorithm 4) is similar to the algorithm for linear ranking. The only di erence lies in the calculation of the selection probabilities. 
Theorem 6.0.2 The expected tness distribution after performing exponential ranking selection with c on the distribution s is E(s; c; N)(fi) = s (fi) = N cN cN 1c S(fi) cs(fi) 1 (6.3) 34 Algorithm 4: (Exponential Ranking Selection) Input: The population P ( ) and the ranking base c 2]0; 1] Output: The population after selection P ( )0 exponential ranking(c,J1; :::; JN): J sorted population J according to tness with worst individual at the rst position s0 0 for i 1 to N do si si 1 + pi (Equation 6.2) od for i 1 to N do r random[0,sN [ J 0 i Jl such that sl 1 r < sl od return fJ 0 1; :::; J 0 Ng Proof: We rst calculate the expected number of individuals with tness fi or worse, i.e. S (fi). As the individuals are sorted according to their tness value this number is given by the sum of the probabilities of the S (fi) less t individuals: S (fi) = N S(fi) X j=1 pj = N c 1 cN 1 S(fi) X j=1 cN j and with the substitution k = N j S (fi) = N c 1 cN 1 N 1 X k=N S(fi) ck = N c 1 cN 1 0@N 1 X k=0 ck N S(fi) 1 X k=0 ck1A = N c 1 cN 1 cN 1 c 1 cN S(fi) c 1 ! = N 1 cN cN 1c S(fi)! As s (fi) = S (fi) S (fi 1) we obtain s (fi) = N c 1 cN 1 c S(fi 1) c S(fi) 35 = N c 1 cN 1c S(fi) cs(fi) 1 2 Example 6.0.1 As an example we use again the tness distribution of the \wallfollowing-robot" from Example 2.0.1. The resulting distribution after exponential ranking selection with c = 0:99 and N = 1000 is shown in Figure 6.1 as a comparison to the average of 20 simulations. Again a very high agreement with the theoretical results is observed. 2.5 5 7.5 10 12.5 15 20 40 60 80 100 Figure 6.1: Comparison between theoretical derived distribution (|) and the average of 20 simulations (-) for ranking selection with c = 0:99. Theorem 6.0.3 Let s(f) be the continuous tness distribution of the population. Then the expected tness distribution after performing exponential ranking selection E with c on the distribution s is E( s; c)(f) = s (f) = N cN cN 1 ln c s(f) c S(f) (6.4) Proof: As the continuous form of (6.2) is given by p(x) = cN x R N 0 cN x and R cx = 1 ln ccx we calculate: S (f) = N cN ln c cN 1 Z S(f) 0 c x dx 36 = N cN cN 1[c x] S(f) 0 = N cN cN 1 1 c S(f) As s (f) = d S (f) df , (6.4) follows. 2 It is useful to introduce a new variable = cN to eliminate the explicit dependence on the population size N : E(s; )(f) = s (f) = ln 1 s(f) S(f) N (6.5) The meaning of will become apparent in the next section. 6.1 Reproduction Rate Corollary 6.1.1 The reproduction rate of exponential ranking selection is RE(f) = ln 1 S(f) N (6.6) This equation shows that the worst t individuals have the lowest reproduction rate R(f0) = ln 1 and the best t individuals have the highest reproduction rate R(fn) = ln 1 . Hence we obtain a natural explanation of the variable , as R(f0) R(fn) = : it describes the ratio of the reproduction rate of the worst and the best individual. Note that c < 1 and hence cN 1 for large N , i.e. the interesting region of values for is in the range from 10 20; : : : ; 1. 6.2 Loss of Diversity Theorem 6.2.1 The loss of diversity pd;E( ) of exponential ranking selection is pd;E( ) = 1 ln 1 ln ln 1 (6.7) Proof: First we calculate from the demand R(fz) = 1 : S(fz) N = ln 1 ln ln (6.8) Using Theorem 2.4.1 we obtain: pd;E( ) = 1 N S(fz) S (fz) 37 = ln 1 ln ln 1 1 ln 1 ln ln ! = ln 1 ln ln 1 1 1 ln = 1 ln 1 ln ln 1 2 The loss of diversity is shown in gure 6.2. -15 -10 -5 0 α 0.25 0.5 0.75 1 -20 10 10 10 10 10 p (α) d Figure 6.2: The loss of diversity pd;E( ) for exponential ranking selection. 
Note the logarithmic scale of the -axis. 6.3 Selection Intensity and Selection Variance The selection intensity and the selection variance are very di cult to calculate for exponential ranking. If we recall the de nition of the selection intensity (de nition 2.5.2) we see that the integral of the Gaussian function occurs as exponent in an inde nite integral. Hence we restrict ourselves here to numerical calculation of the selection intensity as well as of the selection variance. The selection intensity and the selection variance of exponential ranking selection is shown in Figure 6.3 in dependence of the parameter . An approximation formula can be derived using the genetic programming optimization method for symbolic regression (see Appendix A). The selection 38 -15 -10 -5 0 α 0.5 1 1.5 2 2.5 I 10 10 10 10 10 -20 -15 -10 -5 0 α 10 10 10 10 10 -20 0 0.25 0.5 0.75 1 V(α) Figure 6.3: Selection intensity (left) and selection variance (right) of exponential ranking selection. Note the logarithmic scale of the -axis. intensity of exponential ranking selection can be approximated with a relative error of less than 6% for 2 [10 20; 0:8] by IE( ) 0:588ln ln 3:69 (6.9) Similar, an approximation for the selection variance of exponential ranking selection can be found. The following formula approximates the selection variance with an relative error of less than 5% for 2 [10 20; 0:8]: VE( ) ln 1:2 + 2:8414 2:225 ln (6.10) 39 Chapter 7 Proportional Selection Proportional selection is the original selection method proposed for genetic algorithms by Holland [Holland, 1975]. We include the analysis of the selection method mostly because of its fame. Algorithm 5: (Proportional Selection) Input: The population P ( ) Output: The population after selection P ( )0 proportional(J1; :::; JN): s0 0 for i 1 to N do si si 1 + fi M od for i 1 to N do r random[0,sN [ J 0 i Jl such that sl 1 r < sl od return fJ 0 1; :::; J 0 Ng The probability of an individual to be selected is simply proportionate to its tness value, i.e. pi = fi NM (7.1) Algorithm 5 displays the method using a pseudo code formulation. The time complexity of the algorithm is O(N). Obviously, this mechanism will only work if all tness values are greater than zero. Furthermore the selection probabilities strongly depend on the scaling of the tness function. As an example, assume a population of 10 individuals with the best individual having a tness value of 11 and the worst a tness value of 40 1. The selection probability for the best individual is hence pb 16:6% and for the worst pw 1:5%. If we now translate the tness function by 100, i.e. we just add a the constant value 100 to every tness value, we calculate p0b 10:4% and p0w 9:5%. The selection probabilities of the best and the worst individual are now almost identical. This undesirable property arises from the fact that proportional selection is not translation invariant (see e.g. [de la Maza and Tidor, 1993]). Because of this several scaling methods have been proposed to keep proportional selection working, e.g. linear static scaling, linear dynamic scaling, exponential scaling, logarithmic scaling [Grefenstette and Baker, 1989]; sigma truncation [Brill et al., 1992]. Another method to improve proportional selection is the \over selection" of a certain percentage of the best individuals, i.e. to force that 80 % of all individuals are taken from the best 20 % of the population. This method was used in [Koza, 1992]. 
In [M uhlenbein and Schlierkamp-Voosen, 1993] it is already stated that \these modi cations are necessary, not tricks to speed up the algorithm". The following analysis will con rm this statement. Theorem 7.0.1 The expected tness distribution after performing proportional selection on the distribution s is P (s)(fi) = s (f) = s(f) f M (7.2) 7.1 Reproduction Rate Corollary 7.1.1 The reproduction rate of proportional selection is RP (f) = f M (7.3) The reproduction rate is proportionate to the tness value of an individual. If all tness values are close together (as it was in the example at the beginning of this chapter) all individuals have almost the same reproduction rate R 1. Hence no selection takes place anymore. 7.2 Selection Intensity As proportional selection is not translation invariant our original de nition of standardized selection intensity cannot be applied. We will cite here the results obtained by M uhlenbein and Schlierkamp-Voosen [M uhlenbein and SchlierkampVoosen, 1993]. Theorem 7.2.1 [M uhlenbein and Schlierkamp-Voosen, 1993] The standardized selection intensity of proportional selection is IP = M (7.4) 41 where  is the mean variance of the tness values of the population before selection. Proof: See [M uhlenbein and Schlierkamp-Voosen, 1993]. 2 The other properties we are interested in like the selection variance an the loss of diversity are di cult to investigate for proportional selection. The crucial point is the explicit occurrence of the tness value in the expected tness distribution after selection (7.2). Hence an analysis is only possible if we make some further assumptions on the initial tness distribution. This is why other work on proportional selection assume some special functions to be optimized (e.g. [Goldberg and Deb, 1991]). Another weak point is that the selection intensity even in the early stage of the optimization (when the variance is high) is too low. Measurements on a broad range of problems showed sometimes a negative selection intensity. This means that in some cases (due to sampling) there is a decrease in average population tness. Seldom a very high selection intensity occurred (I 1:8) if a superindividual was created. But the measured average selection intensity was in range of 0.1 to 0.3. All the undesired properties together led us to the conclusion that proportional selection is a very unsuited selection scheme. Informally one can say that the only advantage of proportional selection is that it is so di cult to prove the disadvantages. 42 Chapter 8 Comparison of Selection Schemes In the subsequent sections the selection methods are compared according to their properties derived in the preceding chapters. First we will compare the reproduction rates of selection methods and derive an uni ed view of selection schemes. Section 8.2 is devoted to the comparison of the selection intensity and gives a convergence prediction for simple genetic algorithm optimizing the ONEMAX function. The selection intensity is also used in the subsequent sections to compare the methods according to their loss of diversity and selection variance. We will take into account proportional selection only in the rst two subsections when the reproduction rate and the selection intensity are analyzed. In other comparisons it is neglected as it withdraws itself an analysis of the properties we are interested in. 
8.1 Reproduction Rate and Universal Selection The reproduction rate simply gives the number of expected o spring of an individual with a certain tness value after selection. But in the preceding chapters only the reproduction rate for the continuous case have been considered. Table 8.1 gives the equations for the discrete (exact) case. They have been derived using the exact o spring equations (3.1), (4.1), (5.3), (6.3) and (7.2) and doing some simple algebraic manipulations. The examples in the preceding chapter showed a large mean variation of the tness distributions after selection. In the following, we will see that this mean variation can be almost completely eliminated by using the reproduction rate and the so called \stochastic universal sampling". As can be seen from table 8.1 we can calculate the expected distribution in advance without carrying out a \real" selection method. This calculation also enables us to use stochastic universal sampling (SUS) [Baker, 1987] for all selection schemes discussed herein. The SUS algorithm can be stated to be an optimal sampling algorithm. It has zero bias, i.e. no deviation between the expected reproduction rate and the 43 Selection Method Reproduction Rate Tournament RT (fi) = N s(fi) S(fi) N t S(fi 1) N t Truncation R (fi) = 8>>><>>: 0 : S(fi) (1 T )N S(fi) (1 T )N s(fi)T : S(fi 1) (1 T )N < S(fi) 1 T : else Linear Ranking RR(fi) = N 1 N 1 + 1 N 1 (2S(fi) s(fi)) Exponential Ranking RE(fi) = N s(fi) ln 1 S(fi) N s(fi) N 1 Proportional RP (fi) = fi M Table 8.1: Comparison of the reproduction rate of the selection methods for discrete distributions. algorithmic sampling frequency. Furthermore, SUS has a minimal spread, i.e. the range of the possible values for s0(fi) is s0(fi) 2 fbs (fi)c; ds (fi)eg (8.1) The outline of the SUS algorithm is given by algorithm 6. The standard sampling mechanism uses one spin of a roulette wheel (divided into segments for each individual with an the segment size proportional to the reproduction rate) to determine one member of the next generation. Hence, N trials have to be performed to obtain an entire population. As these trials are independent of each other a relatively high variance in the outcome is observed (see also chapter 2 and theorem 2.0.1). This is also the case for tournament selection although there is no explicitly used roulette wheel sampling. In contrary for SUS only a single spin of the wheel is necessary as the roulette has N markers for the \winning individuals" and hence all individuals are chosen at once. By means of the SUS algorithm the outcome of a certain run of the selection scheme is as close as possible to the expected behavior, i.e. the mean variation is minimal. Even though it is not clear whether there any performance advantages in using SUS, it makes the run of a selection method more \predictable". To be able to apply SUS one has to know the expected number of o spring of each individual. Baker has applied this sampling method only to linear ranking selection as here the expected number of o spring is known by construction (see chapter 5). As we have derived this o spring values for the selection methods discussed in the previous chapters it is possible to use stochastic universal sampling for all these selections schemes. Hence, we may obtain a uni ed view of selection schemes, if we neglect the way the reproduction rates were derived and construct an \universal selection method" in the following way: First we compute 44 the tness distribution of the population. 
Next the expected reproduction rates are calculated using the equations derived in the preceding chapters and summarized in table 8.1. In the last step SUS is used to obtain the new population after selection. This algorithm is given in algorithm 7, and the SUS algorithm is outlined by algorithm 6.

Algorithm 6: (Stochastic Universal Sampling)
Input: The population P(τ) and the reproduction rate R_i ∈ [0, N] for each individual J_i
Output: The population after selection P(τ)'

    SUS(R_1, …, R_N; J_1, …, J_N):
        sum ← 0
        j ← 1
        ptr ← random[0, 1)
        for i ← 1 to N do
            sum ← sum + R_i        (R_i is the reproduction rate of individual J_i)
            while (sum > ptr) do
                J'_j ← J_i
                j ← j + 1
                ptr ← ptr + 1
            od
        od
        return {J'_1, …, J'_N}

Algorithm 7: (Universal Selection Method)
Input: The population P(τ)
Output: The population after selection P(τ)'

    universal_selection(J_1, …, J_N):
        s ← fitness_distribution(J_1, …, J_N)
        r ← reproduction_rate(s)
        J' ← SUS(r, J)
        return J'

The time complexity of the universal selection method is O(N ln N), as the fitness distribution has to be computed. Hence, if we perform "tournament selection" with this algorithm, we pay for the lower mean variation with a higher computational complexity.

8.2 Comparison of the Selection Intensity

Selection Method        Selection Intensity
Tournament              I_T(t) ≈ √( 2 (ln t − ln √(4.14 ln t)) )
Truncation              I_Γ(T) = (1/T) · (1/√(2π)) · e^{−f_c²/2}
Linear Ranking          I_R(η⁻) = (1 − η⁻) / √π
Exponential Ranking     I_E(α) ≈ 0.588 · ln(… 3.69 …)   (see (6.9))
Fitness Proportionate   I_P = σ̄ / M̄

Table 8.2: Comparison of the selection intensity of the selection methods.

As the selection intensity is a very important property of a selection method, we give in table 8.3 some settings for the different selection methods that yield the same selection intensity.

I                0.34    0.56    0.84     1.03       1.16       1.35       1.54       1.87     2.16
T: t               -       2       3        4          5          7         10         20       40
R: η⁻             0.4       0       -        -          -          -          -          -        -
Γ: T              0.8    0.66    0.47     0.36       0.30       0.22       0.15       0.08     0.04
E: α             0.29    0.12   0.032   9.8·10⁻³   3.5·10⁻³   4.7·10⁻⁴   2.5·10⁻⁵    10⁻⁹   2.4·10⁻¹⁸
E: c (N = 1000)  0.999   0.998   0.997    0.995      0.994      0.992      0.989      0.979    0.960

Table 8.3: Parameter settings for truncation selection Γ, tournament selection T, linear ranking selection R, and exponential ranking selection E to achieve the same selection intensity I.

The importance of the selection intensity is based on the fact that the behavior of a simple genetic algorithm can be predicted if the fitness distribution is normally distributed. In [Mühlenbein and Schlierkamp-Voosen, 1993] a prediction is made for a genetic algorithm optimizing the ONEMAX (or bit-counting) function. Here the fitness is given by the number of 1's in the binary string of length n. Uniform crossover is used and assumed to be a random process which creates a binomial fitness distribution. As a result, after each recombination phase the input of the next selection phase approximates a Gaussian distribution. Hence, a prediction of this optimization using the selection intensity should be possible. For a sufficiently large population Mühlenbein calculates

p(τ) = (1/2) · ( 1 + sin( (I τ)/√n + arcsin(2 p₀ − 1) ) )   (8.2)

where p₀ denotes the fraction of 1's in the initial random population and p(τ) the fraction of 1's in generation τ. Convergence is characterized by the fact that p(τ_c) = 1, so the convergence time for the special case of p₀ = 0.5 is given by τ_c = (π/2) · √n / I. Mühlenbein derived this formula for truncation selection, where only the selection intensity is used.
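As a quick numerical illustration of (8.2) and the resulting convergence time, the following Python sketch (our own illustration; the function names are arbitrary) evaluates the predicted fraction of 1's and τ_c for a given selection intensity.

    import math

    # Prediction (8.2) for a GA on ONEMAX with uniform crossover: p0 is the
    # initial fraction of 1's, n the string length, I the selection intensity.
    def onemax_proportion(tau, n, I, p0=0.5):
        # the prediction is valid until the sine argument reaches pi/2 (convergence)
        arg = min(I * tau / math.sqrt(n) + math.asin(2 * p0 - 1), math.pi / 2)
        return 0.5 * (1 + math.sin(arg))

    def convergence_time(n, I, p0=0.5):
        # generation at which p(tau) = 1; reduces to (pi/2)*sqrt(n)/I for p0 = 0.5
        return (math.pi / 2 - math.asin(2 * p0 - 1)) * math.sqrt(n) / I

    n = 100
    I = 1 / math.sqrt(math.pi)            # binary tournament (or linear ranking with eta_minus = 0)
    print(convergence_time(n, I))         # about 27.8 generations
    print(onemax_proportion(10, n, I))    # about 0.77 after 10 generations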
Thereby it is straightforward to give the convergence time for any other selection method by substituting I with the corresponding terms derived in the preceding sections. For tournament selection we have

τ_{T,c}(t) ≈ (π/2) · √( n / ( 2 (ln t − ln √(4.14 ln t)) ) )   (8.3)

for truncation selection

τ_{Γ,c}(T) = (π/2) · T · √(2πn) · e^{f_c²/2}   (8.4)

for linear ranking selection

τ_{R,c}(η⁻) = π √(πn) / ( 2 (1 − η⁻) )   (8.5)

and for exponential ranking selection, inserting the approximation (6.9),

τ_{E,c}(α) ≈ 2.671 √n / ln(… 3.69 …)   (8.6)

8.3 Comparison of Loss of Diversity

Table 8.4 summarizes the loss of diversity for the selection methods. It is difficult to compare these relations directly, as they depend on different parameters that are characteristic of the specific selection method, e.g. the tournament size t for tournament selection, the threshold T for truncation selection, etc. Hence, one has to look for an independent measure to eliminate these parameters and to be able to compare the loss of diversity.

Selection Method      Loss of Diversity
Tournament            p_{d,T}(t) = t^{−1/(t−1)} − t^{−t/(t−1)}
Truncation            p_{d,Γ}(T) = 1 − T
Linear Ranking        p_{d,R}(η⁻) = (1 − η⁻)/4
Exponential Ranking   p_{d,E}(α): see (6.7)

Table 8.4: Comparison of the loss of diversity of the selection methods.

We chose this measure to be the selection intensity: the loss of diversity of the selection methods is viewed as a function of the selection intensity. To calculate the corresponding graph one first computes the value of the parameter of a selection method (i.e. t for tournament selection, T for truncation selection, η⁻ for linear ranking selection, and α for exponential ranking selection) that is necessary to achieve a certain selection intensity. With this value the loss of diversity is then obtained using the corresponding equations, i.e. (3.11), (4.4), (5.6), (6.7).

Figure 8.1 shows the result of this comparison: the loss of diversity for the different selection schemes in dependence of the selection intensity. To achieve the same selection intensity, more bad individuals are replaced using truncation selection than using tournament selection or one of the ranking selection schemes, respectively. This means that more "genetic material" is lost using truncation selection. If we suppose that a lower loss of diversity is desirable, as it reduces the risk of premature convergence, we expect that truncation selection should be outperformed by the other selection methods. But in general it depends on the problem and on the representation of the problem to be solved whether a low loss of diversity is "advantageous". With figure 8.1, however, one has a useful tool at hand to make the right decision for a particular problem.

Another interesting fact can be observed if we look again at table 8.4: the loss of diversity is independent of the initial fitness distribution. Nowhere in the derivation of these equations was a certain fitness distribution assumed, and the fitness distribution s(f) occurs nowhere in the equations. In contrast, the (standardized) selection intensity and the (standardized) selection variance are computed for a certain initial fitness distribution (the normalized Gaussian distribution). Hence, the loss of diversity can be viewed as an inherent property of a selection method.
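The comparison procedure just described can be reproduced in a few lines. The following Python sketch (our own illustration, using the tournament approximations from table 8.2 and table 8.4 as reconstructed above) tabulates, for tournament selection, the selection intensity together with the corresponding loss of diversity; these pairs form the tournament branch of figure 8.1.

    import math

    def selection_intensity_tournament(t):
        # approximation of I_T(t) from table 8.2, valid for t >= 2
        return math.sqrt(2 * (math.log(t) - math.log(math.sqrt(4.14 * math.log(t)))))

    def loss_of_diversity_tournament(t):
        # p_{d,T}(t) = t^(-1/(t-1)) - t^(-t/(t-1)) from table 8.4
        return t ** (-1 / (t - 1)) - t ** (-t / (t - 1))

    # pairs (I, p_d) for integer tournament sizes: the dotted points of figure 8.1
    for t in (2, 3, 5, 10, 20):
        print(t, round(selection_intensity_tournament(t), 2),
              round(loss_of_diversity_tournament(t), 2))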
Figure 8.1: The dependence of the loss of diversity p_d on the selection intensity I for tournament selection, truncation selection, linear ranking selection, and exponential ranking selection. Note that for tournament selection only the dotted points on the graph correspond to valid (integer) tournament sizes.

8.4 Comparison of the Selection Variance

We use the same mechanism to compare the selection variance that we used in the preceding section, i.e. the selection variance is viewed as a function of the selection intensity. Figure 8.2 shows the dependence of the selection variance on the selection intensity. It can be seen clearly that truncation selection leads to a lower selection variance than tournament selection. The highest selection variance is obtained by exponential ranking. An interpretation of the results may be difficult, as it depends on the optimization task and the kind of problem to be solved whether a high selection variance is advantageous or not. But again this graph may help to decide on the "appropriate" selection method for a particular optimization problem. If we accept the assumption that a higher variance is advantageous to the optimization process, exponential ranking selection reveals itself to be the best selection scheme. In [Mühlenbein and Voigt, 1995] it is stated that "if two selection methods have the same selection intensity, the method giving the higher standard deviation of the selected parents is to be preferred". From this point of view exponential ranking selection should be the "best" selection method.

Figure 8.2: The dependence of the selection variance V on the selection intensity I for tournament selection, truncation selection, linear ranking selection, and exponential ranking selection. Note that for tournament selection only the dotted points on the graph correspond to valid (integer) tournament sizes.

Selection Method      Selection Variance
Tournament            V_T(t) ≈ √( (2.05 + t) / (3.14 t^{3/2}) )
Truncation            V_Γ(T) = 1 − I_Γ(T) · (I_Γ(T) − f_c)
Linear Ranking        V_R(η⁻) = 1 − I_R²(η⁻)
Exponential Ranking   V_E(α) ≈ ln( 1.2 + 2.8414 / (2.225 α − ln α) )

Table 8.5: Comparison of the selection variance of the selection methods.

8.5 The Complement Selection Schemes: Tournament and Linear Ranking

If we compare the several properties of tournament selection and linear ranking selection, we observe that binary tournament behaves similarly to a linear ranking selection with a very small η⁻. And indeed it is possible to prove that binary tournament and linear ranking with η⁻ = 1/N have identical average behavior.

Theorem 8.5.1 The expected fitness distributions of linear ranking selection with η⁻ = 1/N and tournament selection with t = 2 are identical, i.e.

R(s, 1/N) = T(s, 2)   (8.7)

Proof:

R(s, 1/N)(f_i) = s(f_i) · ((1/N)·N − 1)/(N − 1) + ((1 − 1/N)/(N − 1)) · ( S(f_i)² − S(f_{i−1})² )
              = (1/N) · ( S(f_i)² − S(f_{i−1})² )
              = T(s, 2)(f_i)   □

Goldberg and Deb [Goldberg and Deb, 1991] have also shown this result, but only for the behavior of the best fit individual. By this we see the complementary character of the two selection schemes. For lower selection intensities (I ≤ 1/√π) linear ranking selection is the appropriate selection mechanism, whereas for higher selection intensities (I ≥ 1/√π) tournament selection is better suited. At the border the two selection schemes are identical.
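A small numerical check of theorem 8.5.1 (our own illustration; it assumes the usual linear ranking scheme in which rank j = 1, …, N, counted from the worst individual, is selected with probability (η⁻ + (η⁺ − η⁻)(j − 1)/(N − 1))/N and η⁺ = 2 − η⁻):

    def expected_offspring_linear_ranking(s, eta_minus):
        # s maps fitness value -> number of individuals with that fitness
        N = sum(s.values())
        eta_plus = 2 - eta_minus
        out, rank = {}, 0
        for f in sorted(s):                       # ascending fitness = ascending rank
            expected = 0.0
            for _ in range(s[f]):
                rank += 1
                expected += eta_minus + (eta_plus - eta_minus) * (rank - 1) / (N - 1)
            out[f] = expected
        return out

    def expected_offspring_binary_tournament(s):
        N = sum(s.values())
        out, S_prev = {}, 0
        for f in sorted(s):
            S = S_prev + s[f]                     # cumulative fitness distribution S(f)
            out[f] = (S ** 2 - S_prev ** 2) / N   # expected copies after a t = 2 tournament
            S_prev = S
        return out

    s = {1.0: 3, 2.0: 4, 5.0: 2, 9.0: 1}          # a small fitness distribution, N = 10
    print(expected_offspring_linear_ranking(s, eta_minus=1 / sum(s.values())))
    print(expected_offspring_binary_tournament(s))
    # both give {1.0: 0.9, 2.0: 4.0, 5.0: 3.2, 9.0: 1.9} (up to floating-point rounding)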
Chapter 9 Conclusion

In this paper a unified and systematic approach to analyze selection methods was developed and applied to the selection schemes tournament selection, truncation selection, linear and exponential ranking selection, and proportional selection. This approach is based on the description of the population using fitness distributions. Although this idea is not new, its consistent realization led to a powerful framework that gives a unified view of the selection schemes and allowed several aspects of these selection schemes, so far obtained independently and in isolation, to be derived with a single methodology. Besides, some interesting features of selection schemes could be proven, e.g. the concatenation of several tournament selections (theorem 3.1.1) and the equivalence of binary tournament and linear ranking (theorem 8.5.1). Furthermore, the derivation of the major characteristics of a selection scheme, i.e. the selection intensity, the selection variance and the loss of diversity, could easily be achieved with this approach. The selection intensity was used to obtain a convergence prediction of the simple genetic algorithm with uniform crossover optimizing the ONEMAX function. The comparison of the loss of diversity and the selection variance based on the selection intensity allowed, for the first time, a comparison of "second order" properties of selection schemes. This comparison gives a well-grounded basis to decide which selection scheme should be used, if the impact of these properties on the optimization process is known for the particular problem. The one exception in this paper is proportional selection, which withdraws itself from a detailed mathematical analysis. But based on some basic analysis and some empirical observations we regard proportional selection as a very unsuited selection scheme. The presented analysis can easily be extended to other selection schemes and to other properties of selection schemes.

Appendix A Deriving Approximation Formulas Using Genetic Programming

In this appendix we describe the way the approximation formulas for the selection variance of tournament selection (3.17), the selection intensity of exponential ranking selection (6.9), and the selection variance of exponential ranking selection (6.10) were obtained. In general we use the same approach as Koza in his first book on genetic programming [Koza, 1992]. Genetic Programming (GP) is an optimization method based on natural evolution, similar to genetic algorithms. The major difference is that GP uses trees to represent the individuals, where a GA uses bit-strings. The tree structure can represent functional dependencies or complete computer programs. Hence we can use this optimization method to obtain an analytic approximation of a data set. Given are a certain number of data points (x_i, y_i), and we want to find an analytic expression that approximates the functional dependence y = u(x). The fitness function to be minimized is the maximum relative error over all data points (x_i, y_i). If an arithmetic exception occurs during the evaluation of an individual (such as a division by zero), the individual is punished by a very high error score (100.000).
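A minimal Python sketch of this fitness measure (our own illustration, not the YAGPLIC implementation; the penalty constant is taken to mean 100 000, and the function names are arbitrary):

    import math

    PENALTY = 100_000.0        # assumed reading of the error score "100.000"

    def gp_fitness(candidate, data_points):
        # fitness to be minimized: maximum relative error over all (x_i, y_i)
        # (assumes y_i != 0)
        worst = 0.0
        for x, y in data_points:
            try:
                value = candidate(x)
            except (ZeroDivisionError, OverflowError, ValueError):
                return PENALTY                 # punish arithmetic exceptions
            worst = max(worst, abs(value - y) / abs(y))
        return worst

    # toy usage: how well does sqrt(1/t) approximate the first two rows of table A.1?
    candidate = lambda t: math.sqrt(1.0 / t)
    data = [(1, 1.0), (2, 0.6816901138160949)]
    print(gp_fitness(candidate, data))         # about 0.037 (3.7% maximum relative error)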
The parameters for the optimization are:

- population size 10.000
- maximum tree size 15
- maximum number of generations 30
- tournament selection with tournament size 5
- reducing redundancy using marking crossover [Blickle and Thiele, 1994]
- use of one-step hill-climbing to adjust the RFPC numbers

The last two items need further explanation. The marking crossover introduced in [Blickle and Thiele, 1994] works as follows: during the evaluation of the fitness function all edges in the tree of the individual are marked. The edges that remain unmarked after calculating the fitness value are said to be redundant, because they were never used for the fitness calculation. The crossover operator now only selects edges for crossover that are marked, because only changes at these edges may lead to individuals with a different fitness score. With this approach an increase in performance of almost 50% for the 6-multiplexer problem was achieved [Blickle and Thiele, 1994].

"One-step hill-climbing" works in the following way: after evaluating the fitness of an individual, all random constants in the tree are successively changed by a small amount. If such a change leads to a better individual it is accepted, otherwise it is rejected. In our experiments this amount is set to 0.1.

The very large population size was chosen because only small trees were allowed. No further tuning of the parameters was made, and no comparison of performance with other possible optimization methods (e.g. simulated annealing) was carried out, as this is beyond the scope of this paper. The intention was only to find one good approximation for each data set. The problem was programmed on a SPARC Station 20 using the YAGPLIC library [Blickle, 1995]. A run over 30 generations took about 15 minutes of CPU time. The given solutions were found after 15–23 generations.

A.1 Approximating the Selection Variance of Tournament Selection

The operators and terminals provided to the optimization method for this problem were

F = {Plus, Subtract, Times, Divide, Log, Sqrt}
T = {t, π, RFPC}

where RFPC is a random floating-point constant in the range [−10, 10], determined once at creation time of the population. These sets were chosen with some knowledge in mind about the possible dependency. The following approximation was found with a maximum relative error of 1.66%:

V_T(t) ≈ Sqrt[Divide[Plus[Sqrt[Plus[Log[Pi], Pi]], t], Times[Times[t, Pi], Sqrt[t]]]]

After simplifying this expression and some local fine tuning of the constants, (3.17) is obtained, which approximates the selection variance of tournament selection with a relative error of less than 1.6% for t ∈ {1, …, 30}:

V_T(t) ≈ √( (2.05 + t) / (3.14 t^{3/2}) )   (3.17)

Table A.1 displays the numerically calculated values for the selection variance, the approximation by (3.17) and the relative error of the approximation for the tournament sizes t = 1, …, 30.

A.2 Approximating the Selection Intensity of Exponential Ranking Selection

The operators and terminals provided to the optimization method for this problem were

F = {Plus, Subtract, Times, Divide, Log, Sqrt, Exp}
T = {α, RFPC}

where RFPC is a random floating-point constant in the range [−10, 10], determined once at creation time of the population. The GP found the following approximation with a relative error of 6.3%:

I_E(α) = Divide[Log[Log[Divide[…]]], Times[Sqrt[Power[Plus[8.04, 5.468 …], …]], Exp[Times[3.508, … 0.15]]]]
After some local fine tuning of the real constants and some simplifications, (6.9) is obtained, which approximates the selection intensity of exponential ranking selection with a relative error of less than 5.8%:

I_E(α) ≈ 0.588 ln(… 3.69 …)   (6.9)

Table A.2 displays the numerically calculated values for the selection intensity, the approximation by (6.9) and the relative error of the approximation.

A.3 Approximating the Selection Variance of Exponential Ranking Selection

The operators and terminals provided to the optimization method for this problem were

F = {Plus, Subtract, Times, Divide, Log, Sqrt, Exp}
T = {α, RFPC}

where RFPC is a random floating-point constant in the range [−10, 10], determined once at creation time of the population. One solution with an accuracy of 5.4% found by the GP was

V_E(α) ≈ Log[Subtract[Divide[2.84, Subtract[Times[Exp[0.796], α], Log[α]]], −1.196]]

Further manual tuning of the constants led to the approximation formula (6.10):

V_E(α) ≈ ln( 1.2 + 2.8414 / (2.225 α − ln α) )   (6.10)

Table A.3 displays the numerically calculated values for the selection variance, the approximation by (6.10) and the relative error of the approximation.

Tournament size t    V_T(t)                 Approximation (3.17)    rel. error in %
 1                   1                      0.985314748118875       1.468525188112524
 2                   0.6816901138160949     0.6751186081382552      0.964001904186694
 3                   0.5594672037973512     0.5561984979283774      0.5842533479688358
 4                   0.4917152368747342     0.4906341319420119      0.2198640293503141
 5                   0.4475340690206629     0.4480145502588547      0.1073619354261157
 6                   0.4159271089832759     0.4175510364657733      0.3904355949452292
 7                   0.3919177761267493     0.394389935195578       0.6307851338769677
 8                   0.3728971432867331     0.3760023889838275      0.832735179927804
 9                   0.357353326357783      0.3609311128657064      1.00119020701134
10                   0.3443438232607686     0.3482720218045281      1.140777989441309
11                   0.3332474427030835     0.3374316422588417      1.255583395274936
12                   0.3236363870477149     0.3280026037472064      1.349111803935622
13                   0.3152053842122778     0.3196949671601049      1.424335741931216
14                   0.3077301024704087     0.3122960960698358      1.483765664383183
15                   0.3010415703137873     0.3056460664995608      1.52952171388625
16                   0.2950098090102839     0.299621986989894       1.563398178210858
17                   0.2895330036877659     0.2941276564766719      1.586918496469898
18                   0.2845301297414324     0.2890865414042389      1.601381079377116
19                   0.2799358049283933     0.2844368844693661      1.607897047011976
20                   0.2756966156185853     0.2801282213768255      1.60742116775606
21                   0.2717684436810235     0.2761188509706837      1.600777202362149
22                   0.2681144875238161     0.2723739652629051      1.588678694101006
23                   0.2647037741277227     0.2688642452925616      1.571746069185771
24                   0.2615098815029825     0.2655647916870945      1.55057627681493
25                   0.2585107005876581     0.2624542995840844      1.525507063135685
26                   0.2556866644747772     0.2595144145665026      1.49704721581323
27                   0.2530210522851858     0.2567292244751288      1.465558757444203
28                   0.2504992994478195     0.2540848544627421      1.431363290367017
29                   0.2481086538596352     0.2515691413740026      1.394746801667437
30                   0.245837896441101      0.249171369706327       1.355963955713685

Table A.1: Approximation of the selection variance of tournament selection.

α            I_E(α)      Approximation (6.9)    rel. error in %
1·10⁻²⁰      2.21187     2.26634                2.46276
1·10⁻¹⁹      2.19127     2.23693                2.08369
1·10⁻¹⁸      2.16938     2.20597                1.6866
1·10⁻¹⁷      2.14604     2.17329                1.26989
1·10⁻¹⁶      2.12104     2.13869                0.83183
1·10⁻¹⁵      2.09416     2.10192                0.370482
1·10⁻¹⁴      2.0651      2.06269                0.116247
1·10⁻¹³      2.03349     2.02067                0.630581
1·10⁻¹²      1.99889     1.97541                1.17477
1·10⁻¹¹      1.96069     1.92637                1.75086
1·10⁻¹⁰      1.91813     1.87286                2.36022
1·10⁻⁹       1.87015     1.81399                3.00251
1·10⁻⁸       1.81525     1.74857                3.67343
1·10⁻⁷       1.7513      1.67495                4.35951
1·10⁻⁶       1.67494     1.59077                5.02548
0.00001      1.58068     1.49248                5.58
0.0001       1.4585      1.37426                5.77587
0.001        1.28826     1.22496                4.91391
0.01         1.02756     1.01518                1.20517
0.0158489    0.958452    0.959374               0.0961498
0.0251189    0.88211     0.895999               1.57453
0.0398107    0.797944    0.823028               3.14361
0.0630957    0.705529    0.738058               4.61058
0.1          0.604719    0.638636               5.60873
0.125893     0.55122     0.58292                5.75089
0.158489     0.495745    0.523127               5.52332
0.199526     0.438398    0.459482               4.8093
0.251189     0.379315    0.392562               3.49231
0.316228     0.318668    0.323416               1.49006
0.398107     0.256659    0.253675               1.16253
0.501187     0.193519    0.185613               4.08562
0.630957     0.129509    0.122102               5.71875
0.794328     0.0649044   0.0663754              2.26635

Table A.2: Approximation of the selection intensity of exponential ranking selection.

α            V_E(α)      Approximation (6.10)   rel. error in %
1·10⁻²⁰      0.224504    0.232462               3.54445
1·10⁻¹⁹      0.227642    0.235033               3.24672
1·10⁻¹⁸      0.231048    0.237881               2.95725
1·10⁻¹⁷      0.234767    0.241055               2.67849
1·10⁻¹⁶      0.238849    0.244614               2.41344
1·10⁻¹⁵      0.243361    0.248632               2.16573
1·10⁻¹⁴      0.248386    0.253204               1.93978
1·10⁻¹³      0.254032    0.258454               1.74096
1·10⁻¹²      0.260441    0.264544               1.57569
1·10⁻¹¹      0.267807    0.271695               1.45147
1·10⁻¹⁰      0.276403    0.280208               1.37665
1·10⁻⁹       0.286619    0.290515               1.35936
1·10⁻⁸       0.299052    0.303252               1.40448
1·10⁻⁷       0.314661    0.319393               1.50382
1·10⁻⁶       0.335109    0.340517               1.61388
0.00001      0.363607    0.36936                1.58242
0.0001       0.407156    0.411119               0.973227
0.001        0.482419    0.47699                1.12538
0.01         0.624515    0.595566               4.63544
0.0158489    0.664523    0.631164               5.01997
0.0251189    0.70839     0.672819               5.02134
0.0398107    0.755421    0.721681               4.46638
0.0630957    0.804351    0.778705               3.18843
0.1          0.853263    0.843853               1.1029
0.125893     0.876937    0.878753               0.207166
0.158489     0.899606    0.914171               1.61901
0.199526     0.920882    0.948648               3.01517
0.251189     0.940366    0.979962               4.21072
0.316228     0.957663    1.00499                4.94227
0.398107     0.972403    1.0198                 4.87463
0.501187     0.98425     1.02009                3.6411
0.630957     0.992927    1.00213                0.927139
0.794328     0.998221    0.964101               3.41803

Table A.3: Approximation of the selection variance of exponential ranking selection.
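As a small consistency check of the tuned formulas, the following Python sketch (our own illustration, written against the tables above; the parameter α is written as alpha) evaluates (3.17) and (6.10) at a few of the tabulated points.

    import math

    def v_tournament(t):
        # selection variance approximation (3.17)
        return math.sqrt((2.05 + t) / (3.14 * t ** 1.5))

    def v_exponential(alpha):
        # selection variance approximation (6.10)
        return math.log(1.2 + 2.8414 / (2.225 * alpha - math.log(alpha)))

    print(v_tournament(2))       # 0.6751..., the approximation column of table A.1 for t = 2
    print(v_exponential(1e-20))  # 0.2325..., the first approximation entry of table A.3
    print(v_exponential(0.1))    # 0.8438..., close to the exact value 0.853263 in table A.3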
Appendix B Used Integrals

∫ x e^{−x²/2} dx = −e^{−x²/2}   (B.1)

∫_{−∞}^{∞} e^{−x²/2} dx = √(2π)   (B.2)

∫_{−∞}^{∞} e^{−x²/2} ( ∫_{−∞}^{x} e^{−y²/2} dy ) dx = π   (B.3)

∫_{−∞}^{∞} x e^{−x²/2} dx = 0   (B.4)

∫_{−∞}^{∞} x e^{−x²/2} ( ∫_{−∞}^{x} e^{−y²/2} dy ) dx = √π   (B.5)

∫_{−∞}^{∞} x e^{−x²/2} ( ∫_{−∞}^{x} e^{−y²/2} dy )² dx = √2 π   (B.6)

∫_{−∞}^{∞} x² e^{−x²/2} dx = √(2π)   (B.7)

∫_{−∞}^{∞} x² e^{−x²/2} ( ∫_{−∞}^{x} e^{−y²/2} dy ) dx = π   (B.8)

∫_{−∞}^{∞} t e^{−x²/2} ( ∫_{−∞}^{x} e^{−y²/2} dy )^{t−1} dx = (2π)^{t/2}   (B.9)

Appendix C Glossary

α        Parameter of Exponential Ranking Selection (α = c^N)
c        Basis for Exponential Ranking Selection
η⁻       Probability of the worst fit Individual in Ranking Selection
f        Fitness Value
f(J)     Fitness Value of Individual J
G(μ, σ)  Gaussian Distribution with Mean μ and Variance σ²
I        Selection Intensity
J        Individual
J        Space of all Possible Individuals
M̄        Average Population Fitness
N        Population Size
Selection method subscripts: E Exponential Ranking Selection, T Tournament Selection, Γ Truncation Selection, P Proportional Selection, R Ranking Selection
p_c      Crossover Probability
p_d      Loss of Diversity
P        Population
R        Reproduction Rate
ℝ        Set of Real Numbers
s        (Discrete) Fitness Distribution
s̄        (Continuous) Fitness Distribution
S        Cumulative (Discrete) Fitness Distribution
S̄        Cumulative (Continuous) Fitness Distribution
σ̄²       Mean Variance of the Population Fitness
t        Tournament Size
T        Truncation Threshold
τ        Generation
τ_c      Convergence Time (in Generations) for the ONEMAX Example
V        Selection Variance
ℤ        Set of Integers

Bibliography

[Arnold et al., 1992] B. C. Arnold, N. Balakrishnan, and H. N. Nagaraja. A First Course in Order Statistics. Wiley Series in Probability and Mathematical Statistics, Wiley, New York, 1992.

[Bäck, 1994] Thomas Bäck. Selective pressure in evolutionary algorithms: A characterization of selection mechanisms. In Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence (ICEC94), pages 57-62, 1994.

[Bäck, 1995] Thomas Bäck. Generalized convergence models for tournament- and (μ, λ)-selection. In L. Eshelman, editor, Proceedings of the Sixth International Conference on Genetic Algorithms (ICGA95), San Francisco, CA, 1995. Morgan Kaufmann Publishers.

[Baker, 1987] J. E. Baker. Reducing bias and inefficiency in the selection algorithm. In Proceedings of the Second International Conference on Genetic Algorithms, pages 14-21, Cambridge, MA, 1987. Lawrence Erlbaum Associates.

[Baker, 1989] J. E. Baker. An Analysis of the Effects of Selection in Genetic Algorithms. PhD thesis, Graduate School of Vanderbilt University, Nashville, Tennessee, 1989.

[Blickle and Thiele, 1994] Tobias Blickle and Lothar Thiele. Genetic programming and redundancy. In J. Hopf, editor, Genetic Algorithms within the Framework of Evolutionary Computation (Workshop at KI-94, Saarbrücken), pages 33-38. Max-Planck-Institut für Informatik (MPI-I-94-241), 1994.

[Blickle and Thiele, 1995] Tobias Blickle and Lothar Thiele. A mathematical analysis of tournament selection. In L. Eshelman, editor, Proceedings of the Sixth International Conference on Genetic Algorithms (ICGA95), San Francisco, CA, 1995. Morgan Kaufmann Publishers.

[Blickle, 1995] Tobias Blickle. YAGPLIC User Manual. Computer Engineering and Communication Networks Lab (TIK), Swiss Federal Institute of Technology (ETH) Zürich, Gloriastrasse 35, CH-8092 Zürich, 1995.

[Brill et al., 1992] F. Z. Brill, D. E. Brown, and W. N. Martin. Fast genetic selection of features for neural network classifiers. IEEE Transactions on Neural Networks, 2(3):324-328, March 1992.

[Bulmer, 1980] M. G. Bulmer. The Mathematical Theory of Quantitative Genetics. Clarendon Press, Oxford, 1980.

[Crow and Kimura, 1970] J. F. Crow and M. Kimura.
An Introduction to Population Genetics Theory. Harper and Row, New York, 1970.

[de la Maza and Tidor, 1993] Michael de la Maza and Bruce Tidor. An analysis of selection procedures with particular attention paid to proportional and Boltzmann selection. In Stephanie Forrest, editor, Proceedings of the Fifth International Conference on Genetic Algorithms, pages 124-131, San Mateo, CA, 1993. Morgan Kaufmann Publishers.

[Goldberg and Deb, 1991] David E. Goldberg and Kalyanmoy Deb. A comparative analysis of selection schemes used in genetic algorithms. In G. Rawlins, editor, Foundations of Genetic Algorithms, pages 69-93, San Mateo, 1991. Morgan Kaufmann.

[Goldberg, 1989] David E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Publishing Company, Inc., Reading, Massachusetts, 1989.

[Grefenstette and Baker, 1989] John J. Grefenstette and James E. Baker. How genetic algorithms work: A critical look at implicit parallelism. In J. David Schaffer, editor, Proceedings of the Third International Conference on Genetic Algorithms, pages 20-27, San Mateo, CA, 1989. Morgan Kaufmann Publishers.

[Henrici, 1977] P. Henrici. Applied and Computational Complex Analysis, volume 2. A Wiley-Interscience Series of Texts, Monographs, and Tracts, 1977.

[Holland, 1975] John H. Holland. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, MI, 1975.

[Koza, 1992] John R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. The MIT Press, Cambridge, Massachusetts, 1992.

[Mühlenbein and Schlierkamp-Voosen, 1993] Heinz Mühlenbein and Dirk Schlierkamp-Voosen. Predictive models for the breeder genetic algorithm. Evolutionary Computation, 1(1), 1993.

[Mühlenbein and Voigt, 1995] Heinz Mühlenbein and Hans-Michael Voigt. Gene pool recombination in genetic algorithms. In I. H. Osman and J. P. Kelly, editors, Proceedings of the Metaheuristics International Conference, Norwell, 1995. Kluwer Academic Publishers.

[Shapiro et al., 1994] Jonathan Shapiro, Adam Prügel-Bennett, and Magnus Rattray. A statistical mechanical formulation of the dynamics of genetic algorithms. In Terence C. Fogarty, editor, Evolutionary Computing, AISB Workshop. Springer, LNCS 865, 1994.

[Thierens and Goldberg, 1994a] D. Thierens and D. Goldberg. Convergence models of genetic algorithm selection schemes. In Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors, Parallel Problem Solving from Nature, PPSN III, pages 119-129, Berlin, 1994. Lecture Notes in Computer Science 866, Springer-Verlag.

[Thierens and Goldberg, 1994b] Dirk Thierens and David Goldberg. Elitist recombination: an integrated selection recombination GA. In Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence (ICEC94), pages 508-512, 1994.

[Whitley, 1989] Darrell Whitley. The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In J. David Schaffer, editor, Proceedings of the Third International Conference on Genetic Algorithms, pages 116-121, San Mateo, CA, 1989. Morgan Kaufmann Publishers.
